The yauzl npm package is a library for unzipping ZIP files in Node.js. It provides a low-level interface for reading and extracting ZIP file contents, allowing developers to handle large files and read entries without decompressing the entire file to disk.
Opening a ZIP file
This code demonstrates how to open a ZIP file for reading. The 'lazyEntries' option allows reading entries one at a time.
const yauzl = require('yauzl');

yauzl.open('path/to/file.zip', {lazyEntries: true}, function(err, zipfile) {
  if (err) throw err;
  zipfile.readEntry();
});
Reading entries from a ZIP file
This code shows how to read entries from an open ZIP file. It handles directories and files differently, reading the next entry after each file is processed.
zipfile.on('entry', function(entry) {
  if (/\/$/.test(entry.fileName)) {
    // Directory file names end with '/'.
    zipfile.readEntry();
  } else {
    // File entry
    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // Handle the read stream, possibly piping it somewhere
      readStream.on('end', function() {
        zipfile.readEntry();
      });
    });
  }
});
Extracting a file entry
This code snippet demonstrates how to extract a file entry from a ZIP file. It creates a read stream for the entry and pipes it to a write stream.
const fs = require('fs');

zipfile.openReadStream(entry, function(err, readStream) {
  if (err) throw err;
  // Create a write stream to extract the file to
  const writeStream = fs.createWriteStream('output/path');
  readStream.pipe(writeStream);
});
adm-zip is a JavaScript implementation of zip data compression for Node.js. It provides functionality for reading and writing zip files, similar to yauzl, but also includes zip file creation, which yauzl does not.
JSZip is a library for creating, reading, and editing .zip files with JavaScript, with a lovely and simple API. It works in both Node.js and browser environments, whereas yauzl is designed specifically for Node.js.
unzipper is a decompression library for Node that supports streaming, promises, and piping for zip files. It is an alternative to yauzl with a slightly different API and additional features like parsing zip file contents without extraction.
yet another unzip library for node. For zipping, see yazl.
Design principles:

- Follow the spec. Don't scan for local file headers; read the Central Directory for file metadata.
- Don't block the JavaScript thread. Use and provide async APIs.
- Keep memory usage under control. Don't attempt to buffer entire files in RAM at once.
- Never crash (if used properly). Don't let malformed zip files bring down client applications that are trying to catch errors.
- Catch unsafe file names. See validateFileName().
Usage

var yauzl = require("yauzl");

yauzl.open("path/to/file.zip", {lazyEntries: true}, function(err, zipfile) {
  if (err) throw err;
  zipfile.readEntry();
  zipfile.on("entry", function(entry) {
    if (/\/$/.test(entry.fileName)) {
      // Directory file names end with '/'.
      // Note that entries for directories themselves are optional.
      // An entry's fileName implicitly requires its parent directories to exist.
      zipfile.readEntry();
    } else {
      // file entry
      zipfile.openReadStream(entry, function(err, readStream) {
        if (err) throw err;
        readStream.on("end", function() {
          zipfile.readEntry();
        });
        readStream.pipe(somewhere);
      });
    }
  });
});
See also examples/ for more usage examples.
The default for every optional callback parameter is:

function defaultCallback(err) {
  if (err) throw err;
}
open(path, [options], [callback])

Calls fs.open(path, "r") and reads the fd effectively the same as fromFd() would.
options may be omitted or null. The defaults are {autoClose: true, lazyEntries: false, decodeStrings: true, validateEntrySizes: true, strictFileNames: false}.
autoClose is effectively equivalent to:

zipfile.once("end", function() {
  zipfile.close();
});
lazyEntries indicates that entries should be read only when readEntry() is called.
If lazyEntries is false, entry events will be emitted as fast as possible to allow pipe()ing file data from all entries in parallel. This is not recommended, as it can lead to out of control memory usage for zip files with many entries. See issue #22.
If lazyEntries is true, an entry or end event will be emitted in response to each call to readEntry(). This allows processing of one entry at a time, and will keep memory usage under control for zip files with many entries.
decodeStrings is the default and causes yauzl to decode strings with CP437 or UTF-8 as required by the spec. The exact effects of turning this option off are:

- zipfile.comment, entry.fileName, and entry.fileComment will be Buffer objects instead of Strings.
- Any Info-ZIP Unicode Path Extra Field will be ignored. See extraFields.
- Automatic file name validation will not be performed. See validateFileName().
validateEntrySizes is the default and ensures that an entry's reported uncompressed size matches its actual uncompressed size. This check happens as early as possible, which is either before emitting each "entry" event (for entries with no compression), or during the readStream piping after calling openReadStream(). See openReadStream() for more information on defending against zip bomb attacks.
When strictFileNames is false (the default) and decodeStrings is true, all backslash (\) characters in each entry.fileName are replaced with forward slashes (/). The spec forbids file names with backslashes, but Microsoft's System.IO.Compression.ZipFile class in .NET versions 4.5.0 until 4.6.1 creates non-conformant zipfiles with backslashes in file names. strictFileNames is false by default so that clients can read these non-conformant zipfiles without knowing about this Microsoft-specific bug.
When strictFileNames is true and decodeStrings is true, entries with backslashes in their file names will result in an error. See validateFileName().
When decodeStrings is false, strictFileNames has no effect.
The callback is given the arguments (err, zipfile). An err is provided if the End of Central Directory Record cannot be found, or if its metadata appears malformed. This kind of error usually indicates that this is not a zip file. Otherwise, zipfile is an instance of ZipFile.
fromFd(fd, [options], [callback])

Reads from the fd, which is presumed to be an open .zip file. Note that random access is required by the zip file specification, so the fd cannot be an open socket or any other fd that does not support random access.
options may be omitted or null. The defaults are {autoClose: false, lazyEntries: false, decodeStrings: true, validateEntrySizes: true, strictFileNames: false}.
See open() for the meaning of the options and callback.
fromBuffer(buffer, [options], [callback])

Like fromFd(), but reads from a RAM buffer instead of an open file. buffer is a Buffer.
If a ZipFile is acquired from this method, it will never emit the close event, and calling close() is not necessary.
options may be omitted or null. The defaults are {lazyEntries: false, decodeStrings: true, validateEntrySizes: true, strictFileNames: false}.
See open() for the meaning of the options and callback. The autoClose option is ignored for this method.
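Because fromBuffer() needs no file system access, it is easy to try against an in-memory archive. The sketch below builds the smallest valid zip file, a bare 22-byte End of Central Directory Record; the usage at the end is commented out and assumes yauzl is installed.

```javascript
// A minimal, valid, empty zip file is just the 22-byte End of Central
// Directory Record: the "PK\x05\x06" signature followed by 18 zero bytes.
function makeEmptyZipBuffer() {
  const buffer = Buffer.alloc(22);
  buffer.writeUInt32LE(0x06054b50, 0); // EOCD signature, little-endian
  return buffer;
}

// Hypothetical usage (assumes yauzl is installed):
// const yauzl = require('yauzl');
// yauzl.fromBuffer(makeEmptyZipBuffer(), {lazyEntries: true}, function(err, zipfile) {
//   if (err) throw err;
//   console.log(zipfile.entryCount); // 0
//   zipfile.readEntry(); // "end" follows immediately; no "close" for fromBuffer
// });
```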
fromRandomAccessReader(reader, totalSize, [options], [callback])

This method of reading a zip file allows clients to implement their own back-end file system. For example, a client might translate read calls into network requests.
The reader parameter must be of a type that is a subclass of RandomAccessReader that implements the required methods. The totalSize is a Number and indicates the total file size of the zip file.
options may be omitted or null. The defaults are {autoClose: true, lazyEntries: false, decodeStrings: true, validateEntrySizes: true, strictFileNames: false}.
See open() for the meaning of the options and callback.
dosDateTimeToDate(date, time)

Deprecated. Since yauzl 3.2.0, it is highly recommended to call entry.getLastModDate() instead of this function due to enhanced support for reading third-party extra fields. If you ever have a use case for calling this function directly, please open an issue against yauzl requesting that this function be properly supported again.
This function only remains exported in order to maintain compatibility with older versions of yauzl. It will be removed in yauzl 4.0.0 unless someone asks for it to remain supported.
getFileNameLowLevel(generalPurposeBitFlag, fileNameBuffer, extraFields, strictFileNames)

If you are setting decodeStrings to false, then this function can be used to decode the file name yourself. This function is effectively used internally by yauzl to populate the entry.fileName field when decodeStrings is true.
WARNING: This method of getting the file name bypasses the security checks in validateFileName(). You should call that function yourself to be sure to guard against malicious file paths.
generalPurposeBitFlag can be found on an Entry or LocalFileHeader. Only General Purpose Bit 11 is used, and only when an Info-ZIP Unicode Path Extra Field cannot be found in extraFields.
fileNameBuffer is a Buffer representing the file name field of the entry. This is entry.fileNameRaw or localFileHeader.fileName.
extraFields is the parsed extra fields array from entry.extraFields or parseExtraFields().
strictFileNames is a boolean, the same as the option of the same name in open(). When false, backslash characters (\) will be replaced with forward slash characters (/).
This function always returns a string, although it may not be a valid file name. See validateFileName().
validateFileName(fileName)

Returns null or a String error message depending on the validity of fileName. If fileName starts with "/" or /[A-Za-z]:\//, or if it contains ".." path segments or "\\", this function returns an error message appropriate for use like this:

var errorMessage = yauzl.validateFileName(fileName);
if (errorMessage != null) throw new Error(errorMessage);

This function is automatically run for each entry, as long as decodeStrings is true. See open(), strictFileNames, and Event: "entry" for more information.
parseExtraFields(extraFieldBuffer)

This function is used internally by yauzl to compute entry.extraFields. It is exported in case you want to call it on localFileHeader.extraField.
extraFieldBuffer is a Buffer, such as localFileHeader.extraField.
Returns an Array with each item in the form {id: id, data: data}, where id is a Number and data is a Buffer.
Throws an Error if the data encodes an item with a size that exceeds the bounds of the buffer. You may want to surround calls to this function with try { ... } catch (err) { ... } to handle the error.
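A sketch of that defensive pattern (the helper names here are ours, not part of yauzl's API). Each extra field is encoded as a 2-byte little-endian id, a 2-byte little-endian data size, then the data; a size that runs past the end of the buffer makes parseExtraFields() throw.

```javascript
// Encode one extra field: id (2 bytes LE), data size (2 bytes LE), data.
function buildExtraField(id, data) {
  const header = Buffer.alloc(4);
  header.writeUInt16LE(id, 0);
  header.writeUInt16LE(data.length, 2);
  return Buffer.concat([header, data]);
}

// Wrap the throwing API; treat malformed extra fields as "no fields".
function safeParseExtraFields(yauzl, extraFieldBuffer) {
  try {
    return yauzl.parseExtraFields(extraFieldBuffer);
  } catch (err) {
    return [];
  }
}

// Hypothetical usage (assumes yauzl is installed):
// const yauzl = require('yauzl');
// const fields = safeParseExtraFields(yauzl, buildExtraField(0x7075, Buffer.from([1, 2, 3])));
```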
Class: ZipFile

The constructor for the class is not part of the public API. Use open(), fromFd(), fromBuffer(), or fromRandomAccessReader() instead.
Event: "entry"

Callback gets (entry), which is an Entry. See open() and readEntry() for when this event is emitted.
If decodeStrings is true, entries emitted via this event have already passed file name validation. See validateFileName() and open() for more information.
If validateEntrySizes is true and this entry's compressionMethod is 0 (stored without compression), this entry has already passed entry size validation. See open() for more information.
Event: "end"

Emitted after the last entry event has been emitted. See open() and readEntry() for more info on when this event is emitted.
Event: "close"

Emitted after the fd is actually closed. This is after calling close() (or after the end event when autoClose is true), and after all stream pipelines created from openReadStream() have finished reading data from the fd.
If this ZipFile was acquired from fromRandomAccessReader(), the "fd" in the previous paragraph refers to the RandomAccessReader implemented by the client.
If this ZipFile was acquired from fromBuffer(), this event is never emitted.
Event: "error"

Emitted in the case of errors with reading the zip file. (Note that other errors can be emitted from the streams created from openReadStream() as well.) After this event has been emitted, no further entry, end, or error events will be emitted, but the close event may still be emitted.
readEntry()

Causes this ZipFile to emit an entry or end event (or an error event). This method must only be called when this ZipFile was created with the lazyEntries option set to true (see open()). When this ZipFile was created with the lazyEntries option set to true, entry and end events are only ever emitted in response to this method call.
The event that is emitted in response to this method will not be emitted until after this method has returned, so it is safe to call this method before attaching event listeners.
After calling this method, calling this method again before the response event has been emitted will cause undefined behavior. Calling this method after the end event has been emitted will cause undefined behavior. Calling this method after calling close() will cause undefined behavior.
openReadStream(entry, [options], callback)

entry must be an Entry object from this ZipFile.
callback gets (err, readStream), where readStream is a Readable Stream that provides the file data for this entry. If this zipfile is already closed (see close()), the callback will receive an err.
options may be omitted or null, and has the following defaults:

{
  decompress: entry.isCompressed() ? true : null,
  decrypt: null,
  start: 0, // actually the default is null, see below
  end: entry.compressedSize, // actually the default is null, see below
}
If the entry is compressed (with a supported compression method), and the decompress option is true (or omitted), the read stream provides the decompressed data. Omitting the decompress option is what most clients should do.
The decompress option must be null (or omitted) when the entry is not compressed (see isCompressed()), and either true (or omitted) or false when the entry is compressed. Specifying decompress: false for a compressed entry causes the read stream to provide the raw compressed file data without going through a zlib inflate transform.
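For example, a helper that always requests the raw on-disk bytes has to pass null for stored entries and false for compressed ones. This is a sketch; the function name is ours, and zipfile/entry come from the events above.

```javascript
// Request raw file data: decompress must be null for stored entries
// and false (skipping the zlib inflate transform) for compressed ones.
function openRawStream(zipfile, entry, callback) {
  const options = {decompress: entry.isCompressed() ? false : null};
  zipfile.openReadStream(entry, options, callback);
}
```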
If the entry is encrypted (see isEncrypted()), clients may want to avoid calling openReadStream() on the entry entirely. Alternatively, clients may call openReadStream() for encrypted entries and specify decrypt: false. If the entry is also compressed, clients must also specify decompress: false. Specifying decrypt: false for an encrypted entry causes the read stream to provide the raw, still-encrypted file data. (This data includes the 12-byte header described in the spec.)
The decrypt option must be null (or omitted) for non-encrypted entries, and false for encrypted entries. Omitting the decrypt option (or specifying it as null) for an encrypted entry will result in the callback receiving an err. This default behavior is so that clients not accounting for encrypted files aren't surprised by bogus file data.
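Put together, a sketch of a defensive open that never receives bogus data for encrypted entries (the helper name is ours, not yauzl API):

```javascript
// Open an entry's stream, requesting raw data when it is encrypted.
function openEntryStream(zipfile, entry, callback) {
  if (!entry.isEncrypted()) {
    return zipfile.openReadStream(entry, callback);
  }
  // Raw, still-encrypted bytes (including the 12-byte header);
  // decompress must also be false if the entry is compressed.
  zipfile.openReadStream(entry, {
    decrypt: false,
    decompress: entry.isCompressed() ? false : null,
  }, callback);
}
```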
The start (inclusive) and end (exclusive) options are byte offsets into this entry's file data, and can be used to obtain part of an entry's file data rather than the whole thing. If either of these options are specified and non-null, then the above options must be used to obtain the file's raw data.
Specifying {start: 0, end: entry.compressedSize} will result in the complete file, which is effectively the default values for these options. But note that unlike omitting the options, when you specify start or end as any non-null value, the above requirement is still enforced that you must also pass the appropriate options to get the file's raw data.
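As a sketch (helper name ours): reading just the first few raw bytes of an entry, e.g. to sniff a magic number, while satisfying the raw-data requirement that start/end impose:

```javascript
// Read only the first byteCount raw bytes of an entry's file data.
function openFirstBytes(zipfile, entry, byteCount, callback) {
  zipfile.openReadStream(entry, {
    // start/end address raw data, so the raw-data options are required:
    decompress: entry.isCompressed() ? false : null,
    start: 0,
    end: Math.min(byteCount, entry.compressedSize),
  }, callback);
}
```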
It's possible for the readStream provided to the callback to emit errors for several reasons. For example, if zlib cannot decompress the data, the zlib error will be emitted from the readStream. Two more error cases (when validateEntrySizes is true) are if the decompressed data has too many or too few actual bytes compared to the reported byte count from the entry's uncompressedSize field. yauzl notices this false information and emits an error from the readStream after some number of bytes have already been piped through the stream.
This check allows clients to trust the uncompressedSize field in Entry objects. Guarding against zip bomb attacks can be accomplished by doing some heuristic checks on the size metadata and then watching out for the above errors. Such heuristics are outside the scope of this library, but enforcing the uncompressedSize is implemented here as a security feature.
It is possible to destroy the readStream before it has piped all of its data. To do this, call readStream.destroy(). You must unpipe() the readStream from any destination before calling readStream.destroy().
If this zipfile was created using fromRandomAccessReader(), the RandomAccessReader implementation must provide readable streams that implement a _destroy() method according to https://nodejs.org/api/stream.html#writable_destroyerr-callback (see randomAccessReader._readStreamForRange()) in order for calls to readStream.destroy() to work in this context.
readLocalFileHeader(entry, [options], callback)

This is a low-level function you probably don't need to call. The intended use case is either preparing to call openReadStreamLowLevel() or simply examining the content of the local file header out of curiosity or for debugging zip file structure issues.
entry is an entry obtained from Event: "entry". An entry in this library is a file's metadata from a Central Directory Header, and this function gives the corresponding redundant data in a Local File Header.
options may be omitted or null, and has the following defaults:

{
  minimal: false,
}

If minimal is false (or omitted or null), the callback receives a full LocalFileHeader. If minimal is true, the callback receives an object with a single property and no prototype: {fileDataStart: fileDataStart}. For typical zipfile reading usecases, this field is the only one you need, and yauzl internally effectively uses the {minimal: true} option as part of openReadStream().
The callback receives (err, localFileHeaderOrAnObjectWithJustOneFieldDependingOnTheMinimalOption), where the type of the second parameter is described in the above discussion of the minimal option.
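For instance, a sketch (helper name ours) that records just the serialization-friendly numbers needed to reopen an entry's stream later:

```javascript
// Collect the serialization-friendly numbers for one entry.
function snapshotEntry(zipfile, entry, callback) {
  zipfile.readLocalFileHeader(entry, {minimal: true}, function(err, header) {
    if (err) return callback(err);
    callback(null, {
      fileDataStart: header.fileDataStart,
      compressedSize: entry.compressedSize,
      uncompressedSize: entry.uncompressedSize,
      decompress: entry.isCompressed(),
    });
  });
}
```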
openReadStreamLowLevel(fileDataStart, compressedSize, relativeStart, relativeEnd, decompress, uncompressedSize, callback)

This is a low-level function available for advanced use cases. You probably want openReadStream() instead.
The intended use case for this function is calling readEntry() and readLocalFileHeader() with {minimal: true} first, and then opening the read stream at a later time, possibly after closing and reopening the entire zipfile, possibly even in a different process. The parameters are all integers and booleans, which are friendly to serialization.

- fileDataStart - from localFileHeader.fileDataStart
- compressedSize - from entry.compressedSize
- relativeStart - the resolved value of options.start from openReadStream(). Must be a non-negative integer, not null. Typically 0 to start at the beginning of the data.
- relativeEnd - the resolved value of options.end from openReadStream(). Must be a non-negative integer, not null. Typically entry.compressedSize to include all the data.
- decompress - boolean indicating whether the data should be piped through a zlib inflate stream.
- uncompressedSize - from entry.uncompressedSize. Only used when validateEntrySizes is true. If validateEntrySizes is false, this value is ignored, but must still be present, not omitted, in the arguments; you have to give it some value, even if it's null.
- callback - receives (err, readStream), the same as for openReadStream().

This low-level function does not read any metadata from the underlying storage before opening the read stream. This is both a performance feature and a safety hazard. None of the integer parameters are bounds checked. None of the validation from openReadStream() with respect to compression and encryption is done here either. Only the bounds checks from validateEntrySizes are done, because that is part of processing the stream data.
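Continuing the snapshot idea, a sketch (names ours) that reopens a whole entry from previously saved integers, using the parameter order listed above:

```javascript
// Reopen a full read stream from serialized numbers, possibly against
// a freshly reopened zipfile, even in a different process.
function reopenStream(zipfile, saved, callback) {
  zipfile.openReadStreamLowLevel(
      saved.fileDataStart,
      saved.compressedSize,
      0,                    // relativeStart: beginning of the data
      saved.compressedSize, // relativeEnd: all of the data
      saved.decompress,
      saved.uncompressedSize,
      callback);
}
```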
close()

Causes all future calls to openReadStream() to fail, and closes the fd, if any, after all streams created by openReadStream() have emitted their end events.
If the autoClose option is set to true (see open()), this function will be called automatically effectively in response to this object's end event.
If the lazyEntries option is set to false (see open()) and this object's end event has not been emitted yet, this function causes undefined behavior. If the lazyEntries option is set to true, you can call this function instead of calling readEntry() to abort reading the entries of a zipfile.
It is safe to call this function multiple times; after the first call, successive calls have no effect. This includes situations where the autoClose option effectively calls this function for you.
If close() is never called, then the zipfile is "kept open". For zipfiles created with fromFd(), this will leave the fd open, which may be desirable. For zipfiles created with open(), this will leave the underlying fd open, thereby "leaking" it, which is probably undesirable. For zipfiles created with fromRandomAccessReader(), the reader's close() method will never be called. For zipfiles created with fromBuffer(), the close() function has no effect whether called or not.
Regardless of how this ZipFile was created, there are no resources other than those listed above that require cleanup from this function. This means it may be desirable to never call close() in some usecases.
isOpen

Boolean. true until close() is called; then it's false.
entryCount

Number. Total number of central directory records.
comment

String. Always decoded with CP437 per the spec.
If decodeStrings is false (see open()), this field is the undecoded Buffer instead of a decoded String.
Class: Entry

Objects of this class represent Central Directory Records. Refer to the zipfile specification for more details about these fields.
These fields are of type Number:

- versionMadeBy
- versionNeededToExtract
- generalPurposeBitFlag
- compressionMethod
- lastModFileTime (MS-DOS format, see getLastModDate())
- lastModFileDate (MS-DOS format, see getLastModDate())
- crc32
- compressedSize
- uncompressedSize
- fileNameLength (in bytes)
- extraFieldLength (in bytes)
- fileCommentLength (in bytes)
- internalFileAttributes
- externalFileAttributes
- relativeOffsetOfLocalHeader

These fields are of type Buffer, and represent variable-length bytes before being processed:

- fileNameRaw
- extraFieldRaw
- fileCommentRaw

There are additional fields described below: fileName, extraFields, fileComment. These are the processed versions of the *Raw fields listed above. See their own sections below. (Note the inconsistency in pluralization of "field" vs "fields" in extraField, extraFields, and extraFieldRaw. Sorry about that.)
The new Entry() constructor is available for clients to call, but it's usually not useful. The constructor takes no parameters and does nothing; no fields will exist.
fileName

String. Following the spec, the bytes for the file name are decoded with UTF-8 if generalPurposeBitFlag & 0x800, otherwise with CP437. Alternatively, this field may be populated from the Info-ZIP Unicode Path Extra Field (see extraFields).
This field is automatically validated by validateFileName() before yauzl emits an "entry" event. If this field would contain unsafe characters, yauzl emits an error instead of an entry.
If decodeStrings is false (see open()), this field is the undecoded Buffer instead of a decoded String. In that case, generalPurposeBitFlag and any Info-ZIP Unicode Path Extra Field are ignored, and no automatic file name validation is performed for this file name.
extraFields

Array with each item in the form {id: id, data: data}, where id is a Number and data is a Buffer.
This library looks for and reads the ZIP64 Extended Information Extra Field (0x0001) in order to support ZIP64 format zip files.
This library also looks for and reads the Info-ZIP Unicode Path Extra Field (0x7075) in order to support some zipfiles that use it instead of General Purpose Bit 11 to convey UTF-8 file names. When the field is identified and verified to be reliable (see the zipfile spec), the file name in this field is stored in the fileName property, and the file name in the central directory record for this entry is ignored. Note that when decodeStrings is false, all Info-ZIP Unicode Path Extra Fields are ignored.
None of the other fields are considered significant by this library. Fields that this library reads are left unaltered in the extraFields array.
fileComment

String decoded with the charset indicated by generalPurposeBitFlag & 0x800 as with the fileName. (The Info-ZIP Unicode Path Extra Field has no effect on the charset used for this field.)
If decodeStrings is false (see open()), this field is the undecoded Buffer instead of a decoded String.
Prior to yauzl version 2.7.0, this field was erroneously documented as comment instead of fileComment. For compatibility with any code that uses the field name comment, yauzl creates an alias field named comment which is identical to fileComment.
getLastModDate([options])

Returns the modification time of the file as a JavaScript Date object. The timezone situation is a mess; read on to learn more.
Due to the zip file specification having lackluster support for specifying timestamps natively, there are several third-party extensions that add better support. yauzl supports these encodings:

- InfoZIP "universal timestamp" extra field (0x5455 aka "UT"): signed 32-bit seconds since 1970-01-01 00:00:00Z, which supports the years 1901-2038 (partially inclusive) with 1-second precision. The value is timezone agnostic, i.e. always UTC.
- NTFS extra field (0x000a): 64-bit signed 100-nanoseconds since 1601-01-01 00:00:00Z, which supports the approximate years 20,000BCE-20,000CE with precision rounded to 1 millisecond (due to the JavaScript Date type). The value is timezone agnostic, i.e. always UTC.
- Built-in DOS format lastModFileDate and lastModFileTime: supports the years 1980-2108 (inclusive) with 2-second precision. Timezone is interpreted either as the local timezone or UTC depending on the timezone option documented below.

If both the InfoZIP "universal timestamp" and NTFS extended fields are found, yauzl uses one of them, but which one is unspecified. If neither are found, yauzl falls back to the built-in DOS lastModFileDate and lastModFileTime.
Every possible bit pattern of every encoding can be represented by a JavaScript Date object, meaning this function cannot fail (barring parameter validation), and will never return an Invalid Date object.
options may be omitted or null, and has the following defaults:

{
  timezone: "local", // or "UTC"
  forceDosFormat: false,
}

Set forceDosFormat to true (and do not set timezone) to enable pre-yauzl 3.2.0 behavior where the InfoZIP "universal timestamp" and NTFS extended fields are ignored.
The timezone option is only used in the DOS fallback. If timezone is omitted, null, or "local", the lastModFileDate and lastModFileTime are interpreted in the system's current timezone (using new Date(year, ...)). If timezone is "UTC", the interpretation is in UTC+00:00 (using new Date(Date.UTC(year, ...))).
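A sketch of the different interpretations side by side (the helper name is ours):

```javascript
// Gather the interpretations of one entry's modification time.
function describeTimestamps(entry) {
  return {
    best: entry.getLastModDate(),                           // extra fields, then DOS in local time
    bestUtc: entry.getLastModDate({timezone: 'UTC'}),       // extra fields, then DOS in UTC
    dosLocal: entry.getLastModDate({forceDosFormat: true}), // pre-3.2.0 behavior
  };
}
```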
The JavaScript Date object has several inherent limitations surrounding timezones. There is an ECMAScript proposal to add better timezone support to JavaScript called the Temporal API. Last I checked, it was at stage 3: https://github.com/tc39/proposal-temporal
Once that new API is available and stable, better timezone handling should be possible here somehow. If you notice that the new API has become widely available, please open a feature request against this library to add support for it.
isEncrypted()

Returns whether this entry is encrypted with "Traditional Encryption". Effectively implemented as:

return (this.generalPurposeBitFlag & 0x1) !== 0;

See openReadStream() for the implications of this value. Note that "Strong Encryption" is not supported, and will result in an "error" event emitted from the ZipFile.
isCompressed()

Returns whether this entry is compressed. Effectively implemented as:

return this.compressionMethod === 8;

See openReadStream() for the implications of this value.
Class: LocalFileHeader

This is a trivial class that has no methods and only the following properties. The constructor is available to call, but it doesn't do anything. See readLocalFileHeader().
See the zipfile spec for what these fields mean.

- fileDataStart - Number: inferred from fileNameLength, extraFieldLength, and this struct's position in the zipfile.
- versionNeededToExtract - Number
- generalPurposeBitFlag - Number
- compressionMethod - Number
- lastModFileTime - Number
- lastModFileDate - Number
- crc32 - Number
- compressedSize - Number
- uncompressedSize - Number
- fileNameLength - Number
- extraFieldLength - Number
- fileName - Buffer
- extraField - Buffer

Note that unlike Class: Entry, the fileName and extraField are completely unprocessed. This notably lacks Unicode and ZIP64 handling as well as any kind of safety validation on the file name. See also parseExtraFields().
Also note that if your object is missing some of these fields, make sure to read the docs on the minimal option in readLocalFileHeader().
Class: RandomAccessReader

This class is meant to be subclassed by clients and instantiated for the fromRandomAccessReader() function. An example implementation can be found in test/test.js.
_readStreamForRange(start, end)

Subclasses must implement this method.
start and end are Numbers and indicate byte offsets from the start of the file. end is exclusive, so _readStreamForRange(0x1000, 0x2000) would indicate to read 0x1000 bytes. end - start will always be at least 1.
This method should return a readable stream which will be pipe()ed into another stream. It is expected that the readable stream will provide data in several chunks if necessary. If the readable stream provides too many or too few bytes, an error will be emitted. (Note that validateEntrySizes has no effect on this check, because this is a low-level API that should behave correctly regardless of the contents of the file.)
Any errors emitted on the readable stream will be handled and re-emitted on the client-visible stream (returned from zipfile.openReadStream()) or provided as the err argument to the appropriate callback (for example, for fromRandomAccessReader()).
If you call readStream.destroy() on streams you get from openReadStream(), the returned stream must implement a method ._destroy() according to https://nodejs.org/api/stream.html#writable_destroyerr-callback . If you never call readStream.destroy(), then streams returned from this method do not need to implement a method ._destroy(). ._destroy() should abort any streaming that is in progress and clean up any associated resources. ._destroy() will only be called after the stream has been unpipe()d from its destination.
Note that the stream returned from this method might not be the same object that is provided by openReadStream(). The stream returned from this method might be pipe()d through one or more filter streams (for example, a zlib inflate stream).
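A sketch of a file-backed reader (the class and helper names are ours; the yauzl usage is commented out and assumes yauzl is installed). The only subtlety is that fs.createReadStream uses an inclusive end offset while _readStreamForRange's end is exclusive:

```javascript
// Convert yauzl's exclusive end offset to fs's inclusive one.
function toFsRange(start, end) {
  return {start: start, end: end - 1};
}

// Hypothetical subclass (assumes yauzl is installed):
// const fs = require('fs');
// const yauzl = require('yauzl');
// class FileReader extends yauzl.RandomAccessReader {
//   constructor(path) { super(); this.path = path; }
//   _readStreamForRange(start, end) {
//     return fs.createReadStream(this.path, toFsRange(start, end));
//   }
// }
// const size = fs.statSync('archive.zip').size; // placeholder path
// yauzl.fromRandomAccessReader(new FileReader('archive.zip'), size,
//     {lazyEntries: true}, function(err, zipfile) { /* ... */ });
```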
read(buffer, offset, length, position, callback)

Subclasses may implement this method. The default implementation uses createReadStream() to fill the buffer. This method should behave like fs.read().
close(callback)

Subclasses may implement this method. The default implementation is effectively setImmediate(callback);. callback takes parameters (err).
This method is called once all streams returned from _readStreamForRange() have ended, and no more _readStreamForRange() or read() requests will be issued to this object.
When a malformed zipfile is encountered, the default behavior is to crash (throw an exception). If you want to handle errors more gracefully than this, be sure to do the following:

- Provide callback parameters where they are allowed, and check the err parameter.
- Listen for the error event on any ZipFile object you get from open(), fromFd(), fromBuffer(), or fromRandomAccessReader().
- Listen for the error event on any stream you get from openReadStream().

Minor version updates to yauzl will not add any additional requirements to this list.
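A sketch tying the three rules together (the function name is ours; the yauzl module is passed in so every error path funnels into one handler):

```javascript
// Read a zipfile without ever throwing on malformed input.
function readSafely(yauzl, path, onEntry, onError) {
  yauzl.open(path, {lazyEntries: true}, function(err, zipfile) {
    if (err) return onError(err);          // 1. check callback err parameters
    zipfile.on('error', onError);          // 2. listen for ZipFile "error" events
    zipfile.on('entry', function(entry) {
      zipfile.openReadStream(entry, function(err, readStream) {
        if (err) return onError(err);      // 1. again, for openReadStream
        readStream.on('error', onError);   // 3. listen for read stream errors
        onEntry(entry, readStream);
      });
    });
    zipfile.readEntry();
  });
}
```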
The automated tests for this project run on node versions 12 and up. Older versions of node are not supported.
For a lengthy discussion, see issue #69. In summary, the Mac Archive Utility is buggy when creating large zip files, and this library does not make any effort to work around the bugs. This library will attempt to interpret the zip file data at face value, which may result in errors, or even silently incomplete data. If this bothers you, that's good! Please complain to Apple. :) I have accepted that this library will simply not support that nonsense.
No Streaming Unzip API

Due to the design of the .zip file format, it's impossible to interpret a .zip file from start to finish (such as from a readable stream) without sacrificing correctness. The Central Directory, which is the authority on the contents of the .zip file, is at the end of a .zip file, not the beginning. A streaming API would need to either buffer the entire .zip file to get to the Central Directory before interpreting anything (defeating the purpose of a streaming interface), or rely on the Local File Headers which are interspersed through the .zip file. However, the Local File Headers are explicitly denounced in the spec as being unreliable copies of the Central Directory, so trusting them would be a violation of the spec.
Any library that offers a streaming unzip API must make one of the above two compromises, which makes the library either dishonest or nonconformant (usually the latter). This library insists on correctness and adherence to the spec, and so does not offer a streaming API.
Here is a way to create a spec-conformant .zip file using the zip command line program (Info-ZIP, available in most unix-like environments) that is (nearly) impossible to parse correctly with a streaming parser:
$ echo -ne '\x50\x4b\x07\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' > file.txt
$ zip -q0 - file.txt | cat > out.zip
This .zip file contains a single file entry that uses General Purpose Bit 3, which means the Local File Header doesn't know the size of the file. Any streaming parser that encounters this situation will either immediately fail, or attempt to search for the Data Descriptor after the file's contents. The file's contents are a sequence of 16 bytes crafted to exactly mimic a valid Data Descriptor for an empty file, which will fool any parser that gets this far into thinking that the file is empty rather than containing 16 bytes. What follows the file's real contents is the file's real Data Descriptor, which will likely cause some kind of signature mismatch error for a streaming parser (if one hasn't occurred already).
By using General Purpose Bit 3 (and compression method 0), it's possible to create arbitrarily ambiguous .zip files that distract parsers with file contents that contain apparently valid .zip file metadata.
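To see the mimicry concretely, the 16 bytes written to file.txt above can be decoded as if they were a Data Descriptor (plain Node, no yauzl involved):

```javascript
// The 16 bytes of file.txt from the example above. They are file *contents*,
// but they also decode as a valid Data Descriptor for an empty file.
const contents = Buffer.from('504b0708' + '00'.repeat(12), 'hex');

console.log(contents.length);                       // 16
console.log(contents.readUInt32LE(0).toString(16)); // "8074b50", the optional Data Descriptor signature
console.log(contents.readUInt32LE(4));              // 0, a valid crc-32 for an empty file
console.log(contents.readUInt32LE(8));              // 0, compressed size
console.log(contents.readUInt32LE(12));             // 0, uncompressed size
```

A streaming parser searching forward for a Data Descriptor finds this one first, and every field it checks looks consistent with an empty file.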
For ZIP64, only zip files smaller than 8PiB are supported, not the full 16EiB range that a 64-bit integer should be able to index. This is due to the JavaScript Number type being an IEEE 754 double precision float. The Node.js fs module probably has this same limitation.
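A quick illustration of where the 8PiB figure comes from:

```javascript
// JavaScript Numbers are IEEE 754 doubles: integers are exact only up to
// 2^53 - 1, so byte offsets at or beyond 2^53 bytes (= 8 PiB) lose precision.
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // true
console.log(2 ** 53 / 2 ** 50);                       // 8 (PiB)
console.log(2 ** 53 === 2 ** 53 + 1);                 // true: 8PiB plus 1 byte is not representable
```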
The spec does not allow zip file creators to put arbitrary data here, but rather reserves its use for PKWARE and mentions something about Z390. This doesn't seem useful to expose in this library, so it is ignored.
This library does not support multi-disk zip files. The multi-disk fields in the zipfile spec were intended for a zip file to span multiple floppy disks, which probably never happens now. If the "number of this disk" field in the End of Central Directory Record is not 0, the open(), fromFd(), fromBuffer(), or fromRandomAccessReader() callback will receive an err.
By extension the following zip file fields are ignored by this library and not provided to clients:
You can detect when a file entry is encrypted with "Traditional Encryption" via isEncrypted(), but yauzl will not help you decrypt it. See openReadStream().
If a zip file contains file entries encrypted with "Strong Encryption", yauzl emits an error.
If the central directory is encrypted or compressed, yauzl emits an error.
Many unzip libraries mistakenly read the Local File Header data in zip files. This data is officially defined to be redundant with the Central Directory information, and is not to be trusted. Aside from checking the signature, yauzl ignores the content of the Local File Header.
This library provides the crc32 field of Entry objects read from the Central Directory. However, this field is not used for anything in this library.
The field versionNeededToExtract is ignored, because this library doesn't support the complete zip file spec at any version.
Regarding the compressionMethod field of Entry objects, only method 0 (stored with no compression) and method 8 (deflated) are supported. Any of the other 15 official methods will cause the openReadStream() callback to receive an err.
There may or may not be Data Descriptor sections in a zip file. This library provides no support for finding or interpreting them.
There may or may not be an Archive Extra Data Record section in a zip file. This library provides no support for finding or interpreting it.
Zip files officially support charset encodings other than CP437 and UTF-8, but the zip file spec does not specify how it works. This library makes no attempt to interpret the Language Encoding Flag.
The zip file specification has several ambiguities inherent in its design. Yikes!
- In order for yauzl to locate the End of Central Directory Record, the .ZIP file comment must not contain the end of central dir signature bytes 50 4b 05 06 (the text "PK☺☻" in CP437). While such a comment is allowed by the specification, yauzl will hopefully reject this situation with an "Invalid comment length" error. However, in some situations unpredictable incorrect behavior will ensue, which will probably manifest in either an invalid signature error or some kind of bounds check error, such as "Unexpected EOF".
- A ZIP64 file is detected by finding the bytes 50 4b 06 07 ("PK♠•" in CP437) exactly 20 bytes before the End of Central Directory Record, but those bytes might actually be part of the file name, the extra field, or the file comment of an entry. The presence of these bytes indicates that this is a ZIP64 file.

Change History

3.2.0
- entry.getLastModDate() takes options forceDosFormat to revert the above change, and timezone to allow UTC interpretation of DOS timestamps.
- Documented dosDateTimeToDate() as now deprecated.

3.1.3
- Fixed fromBuffer() when reading corrupt zip files that specify out of bounds file offsets. issue #156
- Run the tests against fromBuffer() and fromRandomAccessReader() in addition to open(), which would have caught the above.

3.1.2

3.1.1

3.1.0
- Added readLocalFileHeader() and Class: LocalFileHeader.
- Added openReadStreamLowLevel().
- Added getFileNameLowLevel() and parseExtraFields().
- Added fields to Class: Entry: fileNameRaw, extraFieldRaw, fileCommentRaw.
- Added examples/compareCentralAndLocalHeaders.js that demonstrates many of these low level APIs.
- Added "engines" field of package.json.
- Fixed openReadStream() with an explicitly null options parameter (as opposed to omitted).

3.0.0
- Subclasses of RandomAccessReader that override the destroy method must instead implement _destroy in accordance with the node standard https://nodejs.org/api/stream.html#writable_destroyerr-callback (note the error and callback parameters). If you continue to override destroy instead, some error handling may be subtly broken. Additionally, this is required for async iterators to work correctly in some versions of node. issue #110
- Forked fd-slicer with a 1-line change, rather than depending on it. issue #114
- Dropped the bl dependency; add package-lock.json; drop deprecated istanbul dependency. This resolves all security warnings for this project. pull #125

2.10.0

2.9.2
- Removed tools/hexdump-zip.js and tools/hex2bin.js. Those tools are now located here: thejoshwolfe/hexdump-zip and thejoshwolfe/hex2bin
- Fixed fromBuffer() and readStream.destroy() for large compressed files. issue #87

2.9.1

- Removed a console.log() accidentally introduced in 2.9.0. issue #64

2.9.0

- Throw an error if readEntry() is called without lazyEntries:true. Previously this caused undefined behavior. issue #63

2.8.0
2.7.0
2.6.0
2.5.0
2.4.3
2.4.2
2.4.1
2.4.0
2.3.1
2.3.0
- Verify that uncompressedSize is correct, or else emit an error. issue #13

2.2.1
2.2.0
2.1.0
- iconv.

2.0.3
2.0.2
2.0.1
- iconv.

2.0.0
One of the trickiest things in development is crafting the test cases located in test/{success,failure}/. These are zip files that have been specifically generated or designed to test certain conditions in this library. I recommend using hexdump-zip to examine the structure of a zipfile. For making new error cases, I typically start by copying test/success/linux-info-zip.zip, and then editing a few bytes with a hex editor.
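A hypothetical sketch of the same byte-editing step done programmatically rather than in a hex editor (the 22-byte buffer below is a minimal End of Central Directory Record standing in for a real fixture file):

```javascript
// A 22-byte EOCDR with the signature set and all other fields zeroed is
// exactly what an empty zip file looks like.
const eocdr = Buffer.alloc(22);
eocdr.writeUInt32LE(0x06054b50, 0); // "PK\x05\x06" end of central dir signature

// Flip bits in one signature byte to fabricate a failure-case input.
const corrupted = Buffer.from(eocdr);
corrupted[0] ^= 0xff;

console.log(eocdr.readUInt32LE(0).toString(16));     // "6054b50"
console.log(corrupted.readUInt32LE(0).toString(16)); // "6054baf"
```

Writing the corrupted buffer to a file in test/failure/ would give a deterministic input that no longer carries a valid signature.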